Risk Analysis
Artificial Intelligence and Nuclear Weapons Proliferation: The Technological Arms Race for (In)visibility
Allison, David M., Herzog, Stephen
A robust nonproliferation regime has contained the spread of nuclear weapons to just nine states. Yet, emerging and disruptive technologies are reshaping the landscape of nuclear risks, presenting a critical juncture for decision makers. This article lays out the contours of an overlooked but intensifying technological arms race for nuclear (in)visibility, driven by the interplay between proliferation-enabling technologies (PETs) and detection-enhancing technologies (DETs). We argue that the strategic pattern of proliferation will be increasingly shaped by the innovation pace in these domains. Artificial intelligence (AI) introduces unprecedented complexity to this equation, as its rapid scaling and knowledge substitution capabilities accelerate PET development and challenge traditional monitoring and verification methods. To analyze this dynamic, we develop a formal model centered on a Relative Advantage Index (RAI), quantifying the shifting balance between PETs and DETs. Our model explores how asymmetric technological advancement, particularly logistic AI-driven PET growth versus stepwise DET improvements, expands the band of uncertainty surrounding proliferation detectability. Through replicable scenario-based simulations, we evaluate the impact of varying PET growth rates and DET investment strategies on cumulative nuclear breakout risk. We identify a strategic fork ahead, where detection may no longer suffice without broader PET governance. Governments and international organizations should accordingly invest in policies and tools agile enough to keep pace with tomorrow's technology.
- Europe > Switzerland > Zürich > Zürich (0.14)
- Asia > North Korea (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.14)
- (18 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Energy > Power Industry > Utilities > Nuclear (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Case-Based Reasoning (0.46)
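The PET/DET dynamic described in the abstract above can be illustrated with a toy simulation. Everything here is an illustrative assumption rather than the paper's calibration: the logistic growth parameters, the stepwise DET schedule, the log-ratio form of the RAI, and the logistic link from RAI to a per-period breakout probability.

```python
import math

def pet_capability(t, k=0.5, t_mid=10.0, cap=200.0):
    # Logistic (S-curve) growth of proliferation-enabling technology.
    return cap / (1.0 + math.exp(-k * (t - t_mid)))

def det_capability(t, step=20.0, interval=5.0):
    # Stepwise detection-enhancing technology: a discrete jump every `interval` years.
    return step * (1 + int(t // interval))

def relative_advantage_index(t):
    # RAI > 0 favors the proliferator; RAI < 0 favors detection.
    return math.log(pet_capability(t) / det_capability(t))

def cumulative_breakout_risk(horizon=25, p_base=0.02):
    # Per-period breakout probability rises as the RAI tilts toward PETs.
    survive = 1.0
    for t in range(horizon):
        p_t = p_base / (1.0 + math.exp(-relative_advantage_index(t)))
        survive *= (1.0 - p_t)
    return 1.0 - survive

for t in (0, 10, 20):
    print(f"t={t:2d}  RAI={relative_advantage_index(t):+.2f}")
print(f"cumulative 25-year breakout risk: {cumulative_breakout_risk():.3f}")
```

With these made-up parameters the RAI starts negative (detection dominates) and turns positive once logistic PET growth outpaces the stepwise DET schedule, which is the widening "band of uncertainty" the abstract describes.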
From nuclear safety to LLM security: Applying non-probabilistic risk management strategies to build safe and secure LLM-powered systems
Gutfraind, Alexander, Bier, Vicki
Large language models (LLMs) offer unprecedented and growing capabilities, but also introduce complex safety and security challenges that resist conventional risk management. While conventional probabilistic risk analysis (PRA) requires exhaustive risk enumeration and quantification, the novelty and complexity of these systems make PRA impractical, particularly against adaptive adversaries. Previous research found that risk management in various fields of engineering, such as nuclear or civil engineering, is often solved by generic (i.e., non-probabilistic) strategies. Here we show how emerging risks in LLM-powered systems could be met with 100+ of these non-probabilistic risk management strategies, including risks from adaptive adversaries. The strategies are divided into five categories and are mapped to LLM security (and AI safety more broadly). We also present an LLM-powered workflow for applying these strategies, along with other workflows suitable for solution architects. Overall, these strategies could contribute (despite some limitations) to security, safety, and other dimensions of responsible AI.
- North America > United States > Illinois > Cook County > Chicago (0.40)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- (14 more...)
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry > Utilities > Nuclear (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
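The mapping idea in the abstract above can be sketched as a small strategy registry. The category names and the LLM-security controls below are hypothetical stand-ins; the paper's actual five-category taxonomy and 100+ strategies may differ.

```python
# Hypothetical category names and mappings -- the paper's taxonomy may differ.
STRATEGIES = {
    "redundancy":        "Run several independent guardrail models and require agreement.",
    "defense_in_depth":  "Layer input filtering, output moderation, and sandboxed tool use.",
    "fail_safe_design":  "Default to refusal or human review when classifier confidence is low.",
    "margins_of_safety": "Set rate limits and context budgets well below tested capacity.",
    "monitoring":        "Log prompts and responses; alert on distribution shifts.",
}

def recommend(strategy: str) -> str:
    """Map a generic engineering risk strategy to an LLM-security control."""
    try:
        return STRATEGIES[strategy]
    except KeyError:
        raise ValueError(f"unknown strategy: {strategy!r}") from None

print(recommend("fail_safe_design"))
```

The point of such a registry is that none of the entries require probability estimates: each is a structural defense that holds even against an adaptive adversary whose attack distribution cannot be enumerated.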
Adaptive Deep Learning for Multiclass Breast Cancer Classification via Misprediction Risk Analysis
Sheeraz, Gul, Chen, Qun, Liu, Feiyu, Zhou, Fengjin, MD
Breast cancer remains one of the leading causes of cancer-related deaths worldwide. Early detection is crucial for improving patient outcomes, yet the diagnostic process is often complex and prone to inconsistencies among pathologists. Computer-aided diagnostic approaches have significantly enhanced breast cancer detection, particularly in binary classification (benign vs. malignant). However, these methods face challenges in multiclass classification, leading to frequent mispredictions. In this work, we propose a novel adaptive learning approach for multiclass breast cancer classification using H&E-stained histopathology images. First, we introduce a misprediction risk analysis framework that quantifies and ranks the likelihood of an image being mislabeled by a classifier. This framework leverages an interpretable risk model that requires only a small number of labeled samples for training. Next, we present an adaptive learning strategy that fine-tunes classifiers based on the specific characteristics of a given dataset. This approach minimizes misprediction risk, allowing the classifier to adapt effectively to the target workload. We evaluate our proposed solutions on real benchmark datasets, demonstrating that our risk analysis framework more accurately identifies mispredictions compared to existing methods. Furthermore, our adaptive learning approach significantly improves the performance of state-of-the-art deep neural network classifiers.
- Asia > China (0.04)
- South America > Peru > Lima Department > Lima Province > Lima (0.04)
- Europe > Portugal (0.04)
- Europe > France > Grand Est > Bas-Rhin > Strasbourg (0.04)
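One way to rank images by misprediction risk, in the spirit of the framework described above, is to score each classifier output with an interpretable uncertainty measure. This sketch uses predictive entropy as a simple proxy feature; the paper's risk model is richer and is trained on a small labeled sample, and the image IDs and probabilities below are invented.

```python
import math

def entropy(probs):
    # Shannon entropy of a classifier's softmax output: higher = less confident.
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_by_misprediction_risk(batch):
    """Rank samples (id, probs) from most to least likely to be mislabeled."""
    scored = [(sample_id, entropy(probs)) for sample_id, probs in batch]
    return sorted(scored, key=lambda s: s[1], reverse=True)

batch = [
    ("img_01", [0.97, 0.01, 0.01, 0.01]),  # confident prediction
    ("img_02", [0.40, 0.35, 0.15, 0.10]),  # ambiguous across classes
    ("img_03", [0.70, 0.20, 0.05, 0.05]),
]
ranking = rank_by_misprediction_risk(batch)
print([sid for sid, _ in ranking])  # -> ['img_02', 'img_03', 'img_01']
```

The highest-risk samples at the head of the ranking are exactly the ones the adaptive fine-tuning step would target.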
Risk Analysis of Flowlines in the Oil and Gas Sector: A GIS and Machine Learning Approach
Chittumuri, I., Alshehab, N., Voss, R. J., Douglass, L. L., Kamrava, S., Fan, Y., Miskimins, J., Fleckenstein, W., Bandyopadhyay, S.
This paper presents a risk analysis of flowlines in the oil and gas sector using Geographic Information Systems (GIS) and machine learning (ML). Flowlines, vital conduits transporting oil, gas, and water from wellheads to surface facilities, often face under-assessment compared to transmission pipelines. This study addresses this gap using advanced tools to predict and mitigate failures, improving environmental safety and reducing human exposure. Extensive datasets from the Colorado Energy and Carbon Management Commission (ECMC) were processed through spatial matching, feature engineering, and geometric extraction to build robust predictive models. Various ML algorithms, including logistic regression, support vector machines, gradient boosting decision trees, and K-Means clustering, were used to assess and classify risks, with ensemble classifiers showing superior accuracy, especially when paired with Principal Component Analysis (PCA) for dimensionality reduction. Finally, a thorough data analysis highlighted spatial and operational factors influencing risks, identifying high-risk zones for focused monitoring. Overall, the study demonstrates the transformative potential of integrating GIS and ML in flowline risk management, proposing a data-driven approach that emphasizes the need for accurate data and refined models to improve safety in petroleum extraction.
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Support Vector Machines (0.55)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Clustering (0.50)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (0.48)
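The PCA step mentioned in the abstract above can be sketched in a few lines. The flowline features and values here are fabricated for illustration; the study's engineered features come from the ECMC datasets.

```python
import numpy as np

# Toy engineered flowline features: [length_km, age_yr, pressure_psi, diameter_in]
X = np.array([
    [1.2, 12.0, 300.0, 4.0],
    [0.8,  3.0, 250.0, 2.0],
    [2.5, 20.0, 450.0, 6.0],
    [1.9,  8.0, 320.0, 4.0],
    [0.5, 15.0, 280.0, 2.0],
])

def pca_reduce(X, n_components=2):
    """Project standardized features onto their top principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize: units differ wildly
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_components].T, S

scores, singular_values = pca_reduce(X)
explained = singular_values**2 / np.sum(singular_values**2)
print("explained variance ratio:", np.round(explained, 3))
```

Standardizing before the SVD matters here because the raw features mix kilometres, years, and psi; the reduced scores would then feed the ensemble classifiers described in the abstract.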
On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet?
Esposito, Matteo, Palagiano, Francesco, Lenarduzzi, Valentina, Taibi, Davide
Context. The security of critical infrastructure has been a pressing concern since the advent of computers and has become even more critical in today's era of cyber warfare. Protecting mission-critical systems (MCSs), essential for national security, requires swift and robust governance, yet recent events reveal the increasing difficulty of meeting these challenges. Aim. Building on prior research showcasing the potential of Generative AI (GAI), such as Large Language Models, in enhancing risk analysis, we aim to explore practitioners' views on integrating GAI into the governance of IT MCSs. Our goal is to provide actionable insights and recommendations for stakeholders, including researchers, practitioners, and policymakers. Method. We designed a survey to collect practical experiences, concerns, and expectations of practitioners who develop and implement security solutions in the context of MCSs. Conclusions and Future Works. Our findings highlight that the safe use of LLMs in MCS governance requires interdisciplinary collaboration. Researchers should focus on designing regulation-oriented models and focus on accountability; practitioners emphasize data protection and transparency, while policymakers must establish a unified AI framework with global benchmarks to ensure ethical and secure LLMs-based MCS governance.
- Europe > Finland > Northern Ostrobothnia > Oulu (0.05)
- Europe > Italy (0.04)
- Asia > China (0.04)
- (7 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.34)
BPQP: A Differentiable Convex Optimization Framework for Efficient End-to-End Learning
Pan, Jianming, Ye, Zeqi, Yang, Xiao, Yang, Xu, Liu, Weiqing, Wang, Lewen, Bian, Jiang
Data-driven decision-making processes increasingly utilize end-to-end learnable deep neural networks to render final decisions. Sometimes, the output of the forward functions in certain layers is determined by the solutions to mathematical optimization problems, leading to the emergence of differentiable optimization layers that permit gradient back-propagation. However, real-world scenarios often involve large-scale datasets and numerous constraints, presenting significant challenges. Current methods for differentiating optimization problems typically rely on implicit differentiation, which necessitates costly computations on the Jacobian matrices, resulting in low efficiency. In this paper, we introduce BPQP, a differentiable convex optimization framework designed for efficient end-to-end learning. To enhance efficiency, we reformulate the backward pass as a simplified and decoupled quadratic programming problem by leveraging the structural properties of the KKT matrix. This reformulation enables the use of first-order optimization algorithms in calculating the backward pass gradients, allowing our framework to potentially utilize any state-of-the-art solver. As solver technologies evolve, BPQP can continuously adapt and improve its efficiency. Extensive experiments on both simulated and real-world datasets demonstrate that BPQP achieves a significant improvement in efficiency--typically an order of magnitude faster in overall execution time compared to other differentiable optimization layers. Our results not only highlight the efficiency gains of BPQP but also underscore its superiority over differentiable optimization layer baselines.
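The structure BPQP exploits can be illustrated on the simplest case, an equality-constrained QP, where the KKT matrix is symmetric and the gradient of a loss on the solution reduces to one adjoint linear solve. This sketch uses a dense direct solve rather than the paper's first-order reformulation, and the problem data are made up.

```python
import numpy as np

def solve_qp_eq(Q, q, A, b):
    """Solve min 1/2 x^T Q x + q^T x  s.t.  A x = b  via the KKT linear system."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    z = np.linalg.solve(K, np.concatenate([-q, b]))
    return z[:n], K

def grad_loss_wrt_q(K, n, grad_x):
    """Backward pass: for loss L(x*), dL/dq = -u_x where K u = [dL/dx; 0].

    K is symmetric, so the adjoint solve reuses the same KKT matrix.
    """
    rhs = np.concatenate([grad_x, np.zeros(K.shape[0] - n)])
    u = np.linalg.solve(K, rhs)
    return -u[:n]

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x, K = solve_qp_eq(Q, q, A, b)
g = grad_loss_wrt_q(K, 2, grad_x=2 * x)  # loss L = ||x||^2, so dL/dx = 2x
print("x* =", x, " dL/dq =", g)
```

Replacing the two `np.linalg.solve` calls with an iterative first-order solver is, roughly, the efficiency lever the abstract describes: the backward pass becomes another structured problem any state-of-the-art solver can handle.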
Introduction to AI Safety, Ethics, and Society
Artificial Intelligence is rapidly embedding itself within militaries, economies, and societies, reshaping their very foundations. Given the depth and breadth of its consequences, it has never been more pressing to understand how to ensure that AI systems are safe, ethical, and have a positive societal impact. This book aims to provide a comprehensive approach to understanding AI risk. Our primary goals include consolidating fragmented knowledge on AI risk, increasing the precision of core ideas, and reducing barriers to entry by making content simpler and more comprehensible. The book has been designed to be accessible to readers from diverse backgrounds. You do not need to have studied AI, philosophy, or other such topics. The content is skimmable and somewhat modular, so that you can choose which chapters to read. We introduce mathematical formulas in a few places to specify claims more precisely, but readers should be able to understand the main points without these.
- Asia > Russia (1.00)
- Asia > Middle East (0.92)
- Europe > United Kingdom > England (0.45)
- (3 more...)
- Workflow (1.00)
- Summary/Review (1.00)
- Research Report > Promising Solution (1.00)
- (5 more...)
- Transportation > Passenger (1.00)
- Transportation > Infrastructure & Services (1.00)
- Transportation > Ground > Road (1.00)
- (58 more...)
Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research
Han, Xuewen, Wang, Neng, Che, Shangkun, Yang, Hongyang, Zhang, Kunpeng, Xu, Sean Xin
In recent years, the application of generative artificial intelligence (GenAI) in financial analysis and investment decision-making has gained significant attention. However, most existing approaches rely on single-agent systems, which fail to fully utilize the collaborative potential of multiple AI agents. In this paper, we propose a novel multi-agent collaboration system designed to enhance decision-making in financial investment research. The system incorporates agent groups with both configurable group sizes and collaboration structures to leverage the strengths of each agent group type. By utilizing a sub-optimal combination strategy, the system dynamically adapts to varying market conditions and investment scenarios, optimizing performance across different tasks. We focus on three sub-tasks: fundamentals, market sentiment, and risk analysis, by analyzing the 2023 SEC 10-K forms of 30 companies listed on the Dow Jones Index. Our findings reveal significant performance variations based on the configurations of AI agents for different tasks. The results demonstrate that our multi-agent collaboration system outperforms traditional single-agent models, offering improved accuracy, efficiency, and adaptability in complex financial environments. This study highlights the potential of multi-agent systems in transforming financial analysis and investment decision-making by integrating diverse analytical perspectives.
- North America > United States > Maryland > Prince George's County > College Park (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > New York > Kings County > New York City (0.05)
- (3 more...)
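The configurable-group idea in the abstract above can be sketched as a weighted aggregation over per-group agent verdicts. The group names match the three sub-tasks; the scores, group sizes, and weights are invented, and a real system would obtain the scores from LLM agents reading the 10-K filings.

```python
from statistics import mean

# Hypothetical per-agent scores in [-1, 1] for each configurable group.
agent_scores = {
    "fundamentals":     [0.6, 0.4, 0.5],     # group size 3
    "market_sentiment": [0.1, -0.2],         # group size 2
    "risk_analysis":    [-0.4, -0.3, -0.5],  # group size 3
}

def aggregate(agent_scores, weights=None):
    """Combine per-group agent verdicts into a single investment signal."""
    weights = weights or {g: 1.0 for g in agent_scores}
    total = sum(weights.values())
    return sum(weights[g] * mean(scores)
               for g, scores in agent_scores.items()) / total

signal = aggregate(agent_scores, weights={"fundamentals": 2.0,
                                          "market_sentiment": 1.0,
                                          "risk_analysis": 1.0})
print(f"combined signal: {signal:+.3f}")
```

Varying the group sizes and weights per task is the kind of configuration-space search the abstract's "sub-optimal combination strategy" operates over.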
Risk Analysis in Customer Relationship Management via Quantile Region Convolutional Neural Network-Long Short-Term Memory and Cross-Attention Mechanism
Huang, Yaowen, Der Leu, Jun, Lu, Baoli, Zhou, Yan
Northeastern University, San Jose, California, 95131, United States
Risk analysis is an important business decision support task in customer relationship management (CRM), involving the identification of potential risks or challenges that may affect customer satisfaction, retention rates, and overall business performance. To enhance risk analysis in CRM, this paper combines the advantages of quantile region convolutional neural network-long short-term memory (QRCNN-LSTM) and cross-attention mechanisms for modeling. The QRCNN-LSTM model combines sequence modeling with deep learning architectures commonly used in natural language processing tasks, enabling the capture of both local and global dependencies in sequence data. The cross-attention mechanism enhances interactions between different input data parts, allowing the model to focus on specific areas or features relevant to CRM risk analysis. By applying QRCNN-LSTM and cross-attention mechanisms to CRM risk analysis, empirical evidence demonstrates that this approach can effectively identify potential risks and provide data-driven support for business decisions.
Keywords: CRM, deep learning, QRCNN-LSTM, cross-attention mechanism, business decision
In today's competitive business environment, customer relationship management (CRM) is one of the key factors for success (Haiyun et al., 2021). To provide personalized services, increase customer satisfaction, and improve sales performance, businesses need to deeply understand and analyze customer behavior and timely identify potential risks (Li et al., 2020b). Traditional risk analysis methods often rely on experience and intuition, but the development of deep learning and machine learning technologies offers new opportunities and challenges for risk analysis in CRM (Libai et al., 2020).
Some commonly used deep learning or machine learning models are:
- Logistic regression (Guerola-Navarro et al., 2021): a widely used classification algorithm that can predict different risk categories, but it often fails to capture complex nonlinear relationships.
- Decision trees (Chen et al., 2021): generate easy-to-understand rules and have good interpretability, but tend to overfit on data with complex structures and high-dimensional features.
- Random forests (Rao et al., 2020): an ensemble learning method that improves prediction performance by combining multiple decision tree models.
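The cross-attention mechanism the paper builds on can be sketched with plain NumPy. In the paper's setting, QRCNN-LSTM features of one part of the input would attend to features of another; here random projections stand in for the learned weight matrices, and both feature sequences are synthetic.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d_k):
    """Scaled dot-product cross-attention: one feature stream attends to another."""
    rng = np.random.default_rng(0)
    W_q = rng.standard_normal((queries.shape[-1], d_k))  # stand-ins for learned weights
    W_k = rng.standard_normal((context.shape[-1], d_k))
    W_v = rng.standard_normal((context.shape[-1], d_k))
    Q, K, V = queries @ W_q, context @ W_k, context @ W_v
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (len_q, len_ctx), rows sum to 1
    return weights @ V, weights

seq_a = np.random.default_rng(1).standard_normal((4, 8))  # e.g. behavior features
seq_b = np.random.default_rng(2).standard_normal((6, 8))  # e.g. transaction features
out, attn = cross_attention(seq_a, seq_b, d_k=16)
print(out.shape, attn.shape)
```

The attention matrix is what gives the model the "focus on specific areas" property claimed in the abstract: each row shows how strongly one element of the first stream draws on each element of the second.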
Speculations on Uncertainty and Humane Algorithms
The appreciation and utilisation of risk and uncertainty can play a key role in helping to solve some of the many ethical issues that are posed by AI. Understanding the uncertainties can allow algorithms to make better decisions by providing interrogatable avenues to check the correctness of outputs. Allowing algorithms to deal with variability and ambiguity in their inputs means they do not need to force people into uncomfortable classifications. Provenance enables algorithms to know what they know, preventing possible harms. Additionally, uncertainty about provenance highlights the trustworthiness of algorithms. It is essential to compute with what we know rather than make assumptions that may be unjustified or untenable. This paper provides a perspective on the importance of risk and uncertainty in the development of ethical AI, especially in high-risk scenarios. It argues that the handling of uncertainty, especially epistemic uncertainty, is critical to ensuring that algorithms do not cause harm, are trustworthy, and make decisions that are humane.
- North America > United States > District of Columbia > Washington (0.04)
- South America > Brazil > Rio de Janeiro > Rio de Janeiro (0.04)
- North America > Canada (0.04)
- (14 more...)
- Transportation > Air (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- (4 more...)
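The idea of not forcing people into uncomfortable classifications can be made concrete with an abstention rule: when the model's confidence does not support a label, it declines to produce one. The threshold, labels, and probabilities below are illustrative assumptions, not from the paper.

```python
def classify_with_abstention(probs, labels, threshold=0.75):
    """Return a label only when confidence clears the threshold; else abstain.

    Abstaining (returning None) defers to a human or a request for more
    information instead of committing to an unsupported classification.
    """
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None  # epistemic state does not support a decision
    return labels[best]

labels = ["approve", "refer", "decline"]
print(classify_with_abstention([0.92, 0.05, 0.03], labels))  # -> approve
print(classify_with_abstention([0.45, 0.40, 0.15], labels))  # -> None (abstain)
```

This is the simplest instance of computing with what we know: the ambiguous case is surfaced as ambiguity rather than silently resolved.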